A Wavelet-Based Approach To Monitoring Parkinson's Disease Symptoms
Parkinson's disease is a neurodegenerative disorder affecting tens of
millions of people worldwide. Lately, there has been considerable interest in
systems for at-home monitoring of patients, using wearable devices that
contain inertial measurement units. We present a new wavelet-based approach for
analysis of data from a single wrist-worn smart-watch, and show high detection
performance for tremor, bradykinesia, and dyskinesia, which have been the major
targets for monitoring in this context. We also discuss the implications of our
controlled-experiment results for uncontrolled home monitoring of freely
behaving patients.
Comment: ICASSP 201
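As a rough illustration of the idea (not the paper's pipeline), a Haar discrete wavelet transform splits a wrist-accelerometer trace into dyadic frequency bands, and the relative energy per band highlights oscillations in the parkinsonian tremor range (roughly 4-6 Hz). The sampling rate, synthetic signal, and feature choice below are assumptions made for this sketch:

```python
import numpy as np

def haar_dwt(x):
    # one level of the Haar discrete wavelet transform:
    # approximation (low-pass) and detail (high-pass) coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_band_energies(x, levels):
    # relative energy of each detail level; at sampling rate fs,
    # detail level k covers roughly the band [fs/2**(k+1), fs/2**k] Hz
    energies = []
    a = x
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(float(np.sum(d ** 2)))
    total = sum(energies) + float(np.sum(a ** 2))
    return [e / total for e in energies]

fs = 64                               # assumed smart-watch sampling rate (Hz)
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(0)
# synthetic 5 Hz tremor-like oscillation plus sensor noise
signal = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)

energies = wavelet_band_energies(signal, levels=4)
# at fs = 64 Hz, detail level 3 (index 2) covers ~4-8 Hz, where tremor lies
print(np.argmax(energies))
```

A classifier for tremor versus normal movement would consume such per-band energies (typically over short sliding windows) as features.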
Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations
Latent factor models for recommender systems represent users and items as
low-dimensional vectors. Privacy risks of such systems have previously been studied
mostly in the context of recovery of personal information in the form of usage
records from the training data. However, the user representations themselves
may be used together with external data to recover private user information
such as gender and age. In this paper we show that user vectors calculated by a
common recommender system can be exploited in this way. We propose the
privacy-adversarial framework to eliminate such leakage of private information,
and study the trade-off between recommender performance and leakage both
theoretically and empirically using a benchmark dataset. An advantage of the
proposed method is that it also helps guarantee fairness of results, since all
implicit knowledge of a set of attributes is scrubbed from the representations
used by the model, and thus can't enter into the decision making. We discuss
further applications of this method towards the generation of deeper and more
insightful recommendations.Comment: International Conference on Pattern Recognition and Method
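To illustrate what scrubbing implicit knowledge of an attribute from user representations means, here is a minimal linear stand-in for the adversarial objective: iteratively train a logistic probe for the private attribute and project its learned direction out of the user vectors. This is not the paper's privacy-adversarial training (which jointly optimizes the recommender and an adversary); the data, dimensions, and learning rates are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 400, 8
gender = rng.integers(0, 2, size=n).astype(float)   # private attribute
U = rng.standard_normal((n, d))                     # toy user vectors
U[:, 0] += 3.0 * (gender - 0.5)                     # leaked attribute direction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_adversary(U, y, steps=300, lr=0.5):
    # logistic probe that tries to recover the attribute from U
    w = np.zeros(U.shape[1])
    for _ in range(steps):
        p = sigmoid(U @ w)
        w -= lr * U.T @ (p - y) / len(y)
    return w

def accuracy(U, w, y):
    return float(np.mean((sigmoid(U @ w) > 0.5) == (y > 0.5)))

before = accuracy(U, train_adversary(U, gender), gender)

# scrub: repeatedly fit a probe and project its direction out of U
# (a linear stand-in for the adversarial-training objective)
for _ in range(5):
    w = train_adversary(U, gender)
    v = w / (np.linalg.norm(w) + 1e-12)
    U = U - np.outer(U @ v, v)

after = accuracy(U, train_adversary(U, gender), gender)
print(before > 0.8, after < before)
```

After scrubbing, a freshly trained probe falls toward chance accuracy: the attribute is no longer linearly recoverable from the vectors, which is the fairness side-effect the abstract describes.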
Privacy-Preserving Decision Trees Training and Prediction
In the era of cloud computing and machine learning, data has become a highly valuable resource. Recent history has shown that the benefits brought forth by this data-driven culture come at a cost of potential data leakage. Such breaches have a devastating impact on individuals and industry, and have led the community to seek privacy-preserving solutions. A promising approach is to utilize Fully Homomorphic Encryption (FHE) to enable machine learning over encrypted data, thus providing resiliency against information leakage. However, computing over encrypted data incurs a high computational overhead, thus requiring the redesign of algorithms in an "FHE-friendly" manner to maintain their practicality.
In this work we focus on the ever-popular tree-based methods (e.g., boosting, random forests), and propose a new privacy-preserving solution for training and prediction with trees. Our solution employs a low-degree approximation for the step function, together with a lightweight interactive protocol, to replace components of the vanilla algorithm that are costly over encrypted data. Our protocols for decision trees achieve practical usability, demonstrated on standard UCI datasets encrypted with fully homomorphic encryption. In addition, the communication complexity of our protocols is independent of the tree size and dataset size in prediction and training, respectively, which significantly improves on prior works.
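The step-function approximation can be sketched independently of any FHE library: fit a low-degree polynomial to the step function by least squares, so that a comparison reduces to additions and multiplications, the operations an FHE scheme supports natively. The degree and accuracy margin below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# least-squares polynomial fit to the step function on [-1, 1];
# degree is kept low because multiplicative depth is the scarce
# resource when evaluating over ciphertexts
xs = np.linspace(-1.0, 1.0, 4001)
step = (xs > 0).astype(float)
deg = 15
coefs = np.polynomial.polynomial.polyfit(xs, step, deg)

def soft_step(x):
    # polynomial evaluation: additions and multiplications only
    return np.polynomial.polynomial.polyval(x, coefs)

# the fit is accurate away from the decision boundary; a small margin
# around 0 is where any polynomial must blur the jump
margin = 0.2
mask = np.abs(xs) > margin
max_err = float(np.max(np.abs(soft_step(xs[mask]) - step[mask])))
print(soft_step(0.5) > 0.8, soft_step(-0.5) < 0.2)
```

Inputs would be rescaled so that the quantities being compared land in [-1, 1], and the unavoidable blur near 0 is exactly why such protocols tolerate a small margin around each threshold.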
Privacy-Preserving Decision Tree Training and Prediction against Malicious Server
Privacy-preserving machine learning enables secure outsourcing of machine learning tasks to an untrusted service provider (the server) while preserving the privacy of the user's (the client's) data. Attaining good concrete efficiency for complicated machine learning tasks, such as training decision trees, is one of the challenges in this area. Prior works on privacy-preserving decision trees required the parties to have comparable computational resources, and instructed the client to perform computation proportional to the complexity of the entire task.
In this work we present new protocols for privacy-preserving decision trees, for both training and prediction, achieving the following desirable properties:
1. Efficiency: the client's complexity is independent of the training-set size during training, and of the tree size during prediction.
2. Security: privacy holds against malicious servers.
3. Practical usability: high accuracy, fast prediction, and feasible training demonstrated on standard UCI datasets, encrypted with fully homomorphic encryption.
To the best of our knowledge, our protocols are the first to offer all these properties simultaneously.
The core of our work consists of two technical contributions. First, a new low-degree polynomial approximation for the step function, leading to faster protocols for training and prediction on encrypted data. Second, a design of an easy-to-use mechanism for proving privacy against malicious adversaries that is suitable for a wide family of protocols, and in particular our protocols; this mechanism could be of independent interest.
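A toy version of prediction with such approximations: replace each hard comparison in a small tree with a polynomial "soft step", and weight every leaf by the product of the gate values along its path, so the whole evaluation becomes one polynomial in the inputs. The tree, degree, and input scaling below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# low-degree least-squares fit of the step function on [-1, 1],
# used as a "soft comparison" (illustrative stand-in)
xs = np.linspace(-1.0, 1.0, 2001)
coefs = np.polynomial.polynomial.polyfit(xs, (xs > 0).astype(float), 7)

def soft_gt(x, t):
    # approximate [x > t]; inputs assumed scaled so x - t lies in [-1, 1]
    return np.polynomial.polynomial.polyval(np.clip(x - t, -1.0, 1.0), coefs)

# a toy depth-2 tree: root tests x[0] > 0, children test x[1]
root = (0, 0.0)
left, right = (1, 0.2), (1, -0.3)

def leaf_weights(x):
    g0 = soft_gt(x[root[0]], root[1])
    gl = soft_gt(x[left[0]], left[1])
    gr = soft_gt(x[right[0]], right[1])
    # each leaf's weight is the product of its path's gate values:
    # additions and multiplications only, hence evaluable on ciphertexts
    return np.array([(1 - g0) * (1 - gl), (1 - g0) * gl,
                     g0 * (1 - gr), g0 * gr])

def hard_leaf(x):
    # the exact (plaintext) tree, for comparison
    if x[root[0]] > root[1]:
        return 3 if x[right[0]] > right[1] else 2
    return 1 if x[left[0]] > left[1] else 0

x = np.array([0.6, 0.9])
print(int(np.argmax(leaf_weights(x))), hard_leaf(x))
```

On inputs that sit away from the thresholds, the largest leaf weight coincides with the leaf the exact tree selects; the multiplicative depth grows with tree depth, which is one reason the interactive protocol is used to keep the encrypted computation shallow.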